
172 Snowflake Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

12 - 15 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Responsibilities
- Actively participate in chapter ceremony meetings and contribute to project planning and estimation.
- Coordinate work with product managers, data owners, platform teams, and other stakeholders throughout the SDLC.
- Use Airflow, Python, Snowflake, dbt, and related technologies to enhance and maintain EDP acquisition, ingestion, processing, orchestration, and data quality (DQ) frameworks.
- Adopt new tools and technologies to enhance framework capabilities.
- Build and conduct end-to-end tests to ensure production operations run successfully after every release cycle.
- Document and present accomplishments and challenges to internal and external stakeholders.
- Demonstrate deep understanding of modern data engineering tools and best practices.
- Design and build solutions that are performant, consistent, and scalable.
- Contribute to design decisions for complex systems.
- Provide L2/L3 support for technical and operational issues.

Qualifications
- 5+ years of experience as a data engineer.
- Expertise with SQL, stored procedures, and UDFs.
- Advanced-level Python programming or advanced core Java programming.
- Experience with Snowflake or similar cloud-native databases.
- Experience with orchestration tools, especially Airflow.
- Experience with declarative transformation tools such as dbt.
- Experience with Azure services, especially ADLS (or equivalent).
- Exposure to real-time streaming platforms and message brokers (e.g., Snowpipe Streaming, Kafka).
- Experience with Agile development concepts and related tools (ADO, Aha).
- Experience conducting root cause analysis and resolving issues.
- Experience with performance tuning.
- Excellent written and verbal communication skills.
- Ability to operate in a matrixed organization and fast-paced environment.
- Strong interpersonal skills with a can-do attitude under challenging circumstances.
- Bachelor's degree in computer science is strongly preferred.
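For context, a minimal sketch of the orchestration pattern this stack implies: an Airflow DAG that runs and then tests dbt models against Snowflake. This is illustrative only, assuming Airflow 2.x and a dbt project already configured for Snowflake; the DAG id, schedule, and paths are hypothetical, not from the posting.

```python
# Hypothetical sketch: an Airflow DAG that runs dbt models against Snowflake.
# DAG id, schedule, and dbt project paths are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="edp_daily_build",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # dbt reads Snowflake credentials from its profiles.yml; none are hardcoded here.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/edp --profiles-dir /opt/dbt",
    )
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/edp --profiles-dir /opt/dbt",
    )
    run_models >> test_models  # only test after the build succeeds
```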

Posted 5 days ago

Apply

2.0 - 4.0 years

2 - 4 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

What We're Looking For
- 2-4 years of experience in data engineering, data extraction, web scraping, or unstructured data processing.
- Strong proficiency in Python, Pandas/NumPy, regex text processing, and shell scripting (Bash).
- Familiarity with web scraping tools such as Beautiful Soup, and with data governance.
- Knowledge of frontend/backend development (React, APIs, Python Flask or FastAPI, databases, cloud technologies) is a plus.
- Ability to work with unstructured or alternative data sources.
- Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions, along with experience with Snowflake for data modeling and performance tuning.
- Experience working in a fast-paced environment with evolving priorities.
- Effective communication and the ability to collaborate across technical and non-technical teams.
- Data product lifecycle management, from acquisition to QC and delivery, is a plus.
- Strong problem-solving skills with attention to detail and a proactive approach.
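For illustration, a minimal web-scraping sketch of the kind this role involves, using requests, Beautiful Soup, and pandas; the URL and CSS selectors are placeholders, not a real data source.

```python
# Hypothetical sketch: extracting structured rows from an HTML page with
# requests and Beautiful Soup, then loading them into a pandas DataFrame.
import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=30)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [
    {
        "title": card.select_one(".title").get_text(strip=True),
        "price": card.select_one(".price").get_text(strip=True),
    }
    for card in soup.select("div.card")  # placeholder selector
]

df = pd.DataFrame(rows)
print(df.head())
```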

Posted 5 days ago

Apply

2.0 - 3.0 years

2 - 3 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

What We're Looking For
- 3+ years of experience in data engineering or data product operations.
- Proficiency in Python and SQL, especially in Snowflake and BigQuery.
- Experience with web scraping frameworks and data governance.
- Ability to work with unstructured or alternative data sources.
- Competence in deploying solutions on Google Cloud Platform (GCP), particularly BigQuery and Cloud Functions, along with experience with Snowflake for data modeling and performance tuning.
- Knowledge of frontend/backend development (React, APIs, Python Flask or FastAPI, databases, cloud technologies) is a plus.
- Skills in ETL/ELT pipeline development and automated workflows.
- Strong problem-solving skills and attention to detail.
- Expertise in detecting anomalies, data drift, and schema changes.
- A product mindset toward datasets: setting SLAs, roadmaps, and metrics that drive change.
- Project management experience using Agile, Kanban, or similar methodologies.
- Excellent documentation skills for technical specs, runbooks, and SOPs.
- Effective communication skills for collaboration between data engineering and business teams.
- Leadership readiness, demonstrated through mentoring, roadmap planning, and team coordination.
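As an illustration of the anomaly and drift detection this role mentions, a minimal sketch that flags a day whose row count falls far outside its trailing window; the threshold, window size, and column names are hypothetical.

```python
# Hypothetical sketch: flagging anomalies in daily row counts with a rolling
# z-score against the trailing window. Data and threshold are illustrative.
import pandas as pd

counts = pd.DataFrame({
    "day": pd.date_range("2024-01-01", periods=10, freq="D"),
    "rows_loaded": [1000, 1020, 990, 1015, 1005, 998, 1010, 400, 1002, 995],
})

# Compare each day to the window of days that precede it (shift avoids
# letting the anomalous day contaminate its own baseline).
baseline = counts["rows_loaded"].shift(1).rolling(window=5, min_periods=3)
z = (counts["rows_loaded"] - baseline.mean()) / baseline.std()

counts["anomaly"] = z.abs() > 3  # flag days far outside the recent trend
print(counts[counts["anomaly"]])  # the 400-row day is flagged
```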

Posted 5 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Key Responsibilities:
- Data Pipeline Development: Assist in the design and implementation of data pipelines to extract, transform, and load (ETL) data from various sources into data warehouses or databases.
- Data Quality Assurance: Monitor and ensure the quality and integrity of data throughout the data lifecycle, identifying and resolving any data discrepancies or issues.
- Collaboration & Analysis: Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions that meet business needs, and perform analyses aligned to the anchor domain.
- Documentation: Maintain clear and comprehensive documentation of data processes, pipeline architectures, and data models for reference and training purposes.
- Performance Optimization: Help optimize data processing workflows and improve the efficiency of existing data pipelines.
- Support Data Infrastructure: Assist in the maintenance and monitoring of data infrastructure, ensuring systems run smoothly and efficiently.
- Learning and Development: Stay updated on industry trends and best practices in data engineering, actively seeking opportunities to learn and grow in the field.

Qualifications
- Education: Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field preferred; relevant coursework or certifications in data engineering or programming are a plus.
- Technical Skills: Familiarity with programming languages such as Python or JavaScript; knowledge of SQL and experience with databases (e.g., Snowflake, MySQL, or PostgreSQL) is preferred.
- Data Tools: Exposure to data processing frameworks and scheduling tools (e.g., PBS/Torque, Slurm, or Airflow) is a plus.
- Analytical Skills: Strong analytical and problem-solving skills, with keen attention to detail.
- Communication Skills: Excellent verbal and written communication skills, with the ability to convey technical information clearly to non-technical stakeholders.
- Team Player: Ability to work collaboratively in a team environment and contribute to group projects.
- Adaptability: Willingness to learn new technologies and adapt to changing priorities in a fast-paced environment.
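For context, a minimal sketch of a single ETL load step into Snowflake using the snowflake-connector-python package; all credentials, table names, and data are placeholders, and a real pipeline would pull credentials from a secrets manager.

```python
# Hypothetical sketch: one extract-transform-load step into Snowflake using
# snowflake-connector-python. Account, credentials, and names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

rows = [("2024-01-01", 1000), ("2024-01-02", 1020)]  # stand-in for extracted data

with conn.cursor() as cur:
    cur.execute(
        "CREATE TABLE IF NOT EXISTS daily_counts (day DATE, rows_loaded INT)"
    )
    # executemany performs a batched insert; %s is the connector's paramstyle
    cur.executemany(
        "INSERT INTO daily_counts (day, rows_loaded) VALUES (%s, %s)", rows
    )
conn.close()
```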

Posted 5 days ago

Apply

2.0 - 4.0 years

2 - 4 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Job description
- Participate in client engagements, enhancing current statistical models and developing new models for business needs, or designing experiments for test/control analysis and business process trials.
- Consult with cross-functional teams on matters relating to machine learning, knowledge discovery, data modeling, and analytics.
- Own predictive models/ML and DOE/A-B testing areas for production-ready ML models hosted across multiple systems.
- Develop business context for the environment and ML uses/applications, with deep knowledge of data inputs, outputs, and statistical testing/modeling.
- Write and run data extraction algorithms to acquire data from primary or secondary data sources; describe and direct data requests for representative data necessary for analyses.
- Develop statistical tests and predictive solutions to make business recommendations for decisioning.
- Train and develop models, run evaluation experiments, perform statistical analysis of results, and refine, test, and validate models in production.
- Develop an understanding of the data framework and how it relates to business use and specific process time points, and make recommendations for any new data needs; coordinate with data engineers to ensure data is representative of analysis solutions.
- Use data analytics and other strategies that optimize statistical efficiency and quality.
- Interpret data, analyze results using statistical techniques, and provide ongoing reports.
- Identify, analyze, and interpret trends or patterns in complex data sets.
- Work with management to prioritize business and information needs.
- Perform ad hoc data analysis and reporting on model performance.
- Identify and define new process improvement opportunities for testing and predictive modeling.

Qualifications / Required and Desired Skills
- Advanced degree in a quantitative discipline: Operations Research, Statistics, Mathematics, Computer Science, Engineering, Economics, or similar.
- Experience developing a variety of machine learning models and algorithms in a commercial environment, with a track record of creating meaningful business impact.
- Proficient with PySpark, NoSQL, Python, and distributed programming.
- Expertise working with MongoDB, Snowflake, Databricks, and cloud computing platforms (AWS, GCP, or Azure), or an equivalent on-premise platform and deployment.
- Experience in client engagements, interpreting the client's business challenges, and recommending statistical analysis solutions (i.e., analytical consulting and solution design).
- Experience in presentation design, development, and delivery, with the communication skills to present analytical results and recommendations for action-oriented, data-driven decisions and the associated operational and financial impacts.
- Experience with GenAI, LLM workflows, Graph RAG, etc. is an added advantage.
- Flexibility to work in a shift model.
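As an illustration of the test/control analysis this role mentions, a minimal A/B-test sketch using scipy; the conversion data below is fabricated purely for demonstration.

```python
# Hypothetical sketch: a two-sample test for an A/B (test/control) trial.
# Sample sizes and conversion rates are fabricated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
control = rng.binomial(1, 0.10, size=5000)   # 10% baseline conversion
variant = rng.binomial(1, 0.12, size=5000)   # 12% treated conversion

# Welch's t-test on conversion indicators approximates a two-proportion test
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

lift = variant.mean() - control.mean()
print(f"lift={lift:.4f}, t={t_stat:.2f}, p={p_value:.4f}")
```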

Posted 5 days ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

We are looking for a person to join the Advanced Data Analytics team within AFE Single Security. Advanced Data Analytics is a team of quantitative data and product specialists focused on delivering Single Security data content, governance, product solutions, and a research platform. The team leverages data, cloud, and emerging technologies to build an innovative data platform, with a focus on business and research use cases in the Single Security space. The team uses various statistical and mathematical methodologies to derive insights and generate content that helps develop predictive models, clustering, and classification solutions, and enables governance. The team works on Mortgage, Structured, and Credit products.

We are looking for a person to help build and expand Data & Analytics content in the Credit space. The person will be responsible for building, enhancing, and maintaining the Credit Content Suite, working on:
- Credit derived data content
- Model and data governance
- Credit model and analytics

Experience
- Experience with Scala.
- Knowledge of ETL, data curation, and analytical jobs using a distributed computing framework such as Spark.
- Knowledge of and experience working with large enterprise databases like Snowflake and Cassandra, and cloud managed services like Dataproc and Databricks.
- Knowledge of financial instruments such as corporate bonds, derivatives, etc.
- Knowledge of regression methodologies.
- Aptitude for designing and building tools for data governance.
- Python knowledge is a plus.

Qualifications
- Bachelor's/Master's in Computer Science with a major in Math, Economics, or a related field.
- 7+ years of relevant experience.
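For context, a minimal sketch of the Spark-based data curation this posting describes. The posting emphasizes Scala, but the same pipeline is sketched here in PySpark for brevity; paths, column names, and the unit conversion are placeholders.

```python
# Hypothetical sketch: a small Spark curation job; read raw bond trades,
# standardize columns, and write a curated table. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("credit_content_curation").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/corporate_bonds/")  # placeholder path

curated = (
    raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
       .withColumn("yield_bps", F.col("yield_pct") * 100)  # percent -> basis points
       .filter(F.col("notional") > 0)                      # drop degenerate records
       .dropDuplicates(["trade_id"])                       # one row per trade
)

curated.write.mode("overwrite").parquet("s3://example-bucket/curated/corporate_bonds/")
```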

Posted 5 days ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Key Responsibilities:
Data Transformation and Analysis
- Design, develop, migrate, test, and deploy Matillion ETL jobs in accordance with project requirements.
- Write complex queries, backed by strong SQL knowledge.
- Design, develop, and implement scalable data processing solutions.
- Collaborate with the Data and BI teams to understand data integration needs and translate them into Matillion ETL solutions.
- Ensure data quality, integrity, and consistency throughout the ETL pipeline.
- Integrate data from different systems and sources to provide a unified view for analytical purposes.
- Analyze business requirements to determine the volume of data extracted from different sources and the data models involved, ensuring the quality of the data.
- Determine the best storage medium for the data warehouse.
- Ensure data quality at the transformation stage, eliminating errors and fixing unstructured data.
- Ensure data is transformed and loaded into the warehouse as per business needs and standards, and ultimately create and manage a secure data warehouse.
Collaboration and Stakeholder Engagement
- Act as a liaison between technical teams and business units to align data management efforts with organizational goals.

Qualifications:
- Education: Master's or Bachelor's degree in Business Analytics, Data Science, Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience implementing ETL processes to extract, transform, and load data from various sources using Matillion; must be an expert in ETL processes, with hands-on experience with Snowflake and the Matillion tool.
- Skills: Expert in Matillion for ETL; proficient in Snowflake; proficiency in SQL and Python for developing data processing and analysis scripts. Knowledge of data visualization platforms such as Power BI is good to have. Excellent problem-solving and communication skills, with the ability to work cross-functionally.

Posted 5 days ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Responsibilities
- Actively participate in chapter ceremony meetings and contribute to project planning and estimation.
- Coordinate work with product managers, data owners, platform teams, and other stakeholders throughout the SDLC.
- Use Airflow, Python, Snowflake, dbt, and related technologies to enhance and maintain EDP acquisition, ingestion, processing, orchestration, and data quality (DQ) frameworks.
- Adopt new tools and technologies to enhance framework capabilities.
- Build and conduct end-to-end tests to ensure production operations run successfully after every release cycle.
- Document and present accomplishments and challenges to internal and external stakeholders.
- Demonstrate deep understanding of modern data engineering tools and best practices.
- Design and build solutions that are performant, consistent, and scalable.
- Contribute to design decisions for complex systems.
- Provide L2/L3 support for technical and operational issues.

Qualifications
- 5+ years of experience as a data engineer.
- Expertise with SQL, stored procedures, and UDFs.
- Advanced-level Python programming or advanced core Java programming.
- Experience with Snowflake or similar cloud-native databases.
- Experience with orchestration tools, especially Airflow.
- Experience with declarative transformation tools such as dbt.
- Experience with Azure services, especially ADLS (or equivalent).
- Exposure to real-time streaming platforms and message brokers (e.g., Snowpipe Streaming, Kafka).
- Experience with Agile development concepts and related tools (ADO, Aha).
- Experience conducting root cause analysis and resolving issues.
- Experience with performance tuning.
- Excellent written and verbal communication skills.
- Ability to operate in a matrixed organization and fast-paced environment.
- Strong interpersonal skills with a can-do attitude under challenging circumstances.
- Bachelor's degree in computer science is strongly preferred.

Posted 5 days ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Delhi, India

On-site

Source: Foundit

Key Responsibilities:
Data Transformation and Analysis
- Design, develop, migrate, test, and deploy Matillion ETL jobs in accordance with project requirements.
- Write complex queries, backed by strong SQL knowledge.
- Design, develop, and implement scalable data processing solutions.
- Collaborate with the Data and BI teams to understand data integration needs and translate them into Matillion ETL solutions.
- Ensure data quality, integrity, and consistency throughout the ETL pipeline.
- Integrate data from different systems and sources to provide a unified view for analytical purposes.
- Analyze business requirements to determine the volume of data extracted from different sources and the data models involved, ensuring the quality of the data.
- Determine the best storage medium for the data warehouse.
- Ensure data quality at the transformation stage, eliminating errors and fixing unstructured data.
- Ensure data is transformed and loaded into the warehouse as per business needs and standards, and ultimately create and manage a secure data warehouse.
Collaboration and Stakeholder Engagement
- Act as a liaison between technical teams and business units to align data management efforts with organizational goals.

Qualifications:
- Education: Master's or Bachelor's degree in Business Analytics, Data Science, Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience implementing ETL processes to extract, transform, and load data from various sources using Matillion; must be an expert in ETL processes, with hands-on experience with Snowflake and the Matillion tool.
- Skills: Expert in Matillion for ETL; proficient in Snowflake; proficiency in SQL and Python for developing data processing and analysis scripts. Knowledge of data visualization platforms such as Power BI is good to have. Excellent problem-solving and communication skills, with the ability to work cross-functionally.

Posted 5 days ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: Foundit

Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Manager - WD Adaptive. In this role you will be responsible for Workday Adaptive development, and you will interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big-data technologies.

Key Responsibilities:
- Workday Adaptive development.
- Prepare high-level design and ETL design.
- Create and support batch and real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
- Design AWS ETL workflows and ETL mappings.
- Maintain large ETL workflows; review and test ETL programs.
- Experience in AWS Athena, Glue PySpark, EMR, DynamoDB, Redshift, Kinesis, Lambda, and Snowflake.

Qualifications we seek in you!
Minimum Qualifications:
- Education: Bachelor's degree in computer science, engineering, or a related field (or equivalent experience).
- Relevant years of experience in Workday Adaptive development.
- Experience creating and supporting batch and real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
- Experience preparing high-level designs and ETL designs; maintaining large ETL workflows; reviewing and testing ETL programs.

Preferred Qualifications/Skills:
- Proficient in AWS Redshift, S3, Glue, Athena, and DynamoDB.
- Experience in Python and Java.
- Has performed the ETL developer role in at least 3 large end-to-end projects.
- Good experience in performance tuning and debugging of ETL programs.
- Good experience in database and data warehouse concepts: SCD1, SCD2, SQL.

Why join Genpact
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
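For illustration, the standard skeleton of an AWS Glue PySpark job of the kind listed above: reading a Glue Data Catalog table and writing Parquet to S3. Database, table, and bucket names are placeholders, not from the posting.

```python
# Hypothetical sketch: an AWS Glue PySpark job skeleton. Names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table)
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Write curated output back to S3 as Parquet (placeholder bucket)
glue_context.write_dynamic_frame.from_options(
    frame=events,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/events/"},
    format="parquet",
)

job.commit()
```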

Posted 6 days ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: Foundit

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers toward a goal.

Job Description:
- Experience in the IT industry.
- Working experience building productionized data ingestion and processing pipelines in Snowflake.
- Strong understanding of Snowflake architecture.
- Fully well-versed in data warehousing concepts.
- Expertise and excellent understanding of Snowflake features and of integrating Snowflake with other data processing systems.
- Able to create data pipelines for ETL/ELT.
- dbt experience is good to have.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualization tools and methodologies.
- Works independently on business problems and generates meaningful insights.
- Experience/knowledge of Snowpark, Streamlit, or GenAI is good to have but not mandatory.
- Experience implementing Snowflake best practices.
- Snowflake SnowPro Core Certification is an added advantage.

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to customers, and working with the offshore team.
- Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures, UDFs, and Snowsight.
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python and PySpark.
- Some experience with Snowflake RBAC and data security.
- Good experience implementing CDC or SCD Type 2.
- In-depth understanding of data warehouse and ETL concepts and data modeling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Experience building data ingestion pipelines; optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Experience with deployment using CI/CD tools and with repositories like Azure Repos or GitHub is good to have.

Qualifications we seek in you!
Minimum qualifications:
- B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Senior Snowflake Data Engineer.
- Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, data modeling, and data warehousing concepts.

Why join Genpact
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
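As context for the Streams and Tasks utilities named above, a minimal sketch of an incremental merge pattern issued through snowflake-connector-python; all object names, the schedule, and the merge keys are placeholders.

```python
# Hypothetical sketch: CDC-style incremental processing with Snowflake
# Streams and Tasks, issued via snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="de_user", password="***")

statements = [
    # Capture row-level changes on the raw table
    "CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw.orders",
    # A task that merges changes into the curated table every 5 minutes,
    # but only when the stream actually has new data
    """
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = transform_wh
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      MERGE INTO curated.orders t
      USING raw_orders_stream s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.status = s.status
      WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (s.order_id, s.status)
    """,
    "ALTER TASK merge_orders RESUME",  # tasks are created suspended
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```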

Posted 6 days ago

Apply

10.0 - 12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Job Category: Sales

About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And we empower you to be a Trailblazer, too: driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change, and in companies doing well and doing good, you've come to the right place.

Overview of the Role
We have an outstanding opportunity for an expert AI and Data Cloud Solutions Engineer to work with our trailblazing customers in crafting ground-breaking customer engagement roadmaps demonstrating the Salesforce applications and platform across the machine learning and LLM/GPT domains in India. The successful applicant will have a track record of driving business outcomes through technology solutions, with experience engaging at the C-level with business and technology groups.

Responsibilities:
- Primary pre-sales technical authority for all aspects of AI usage within the Salesforce product portfolio: existing Einstein ML-based capabilities and new (2023) generative AI.
- Majority of time (60%+) will be customer/external facing.
- Evangelisation of Salesforce AI capabilities.
- Assessing customer requirements and use cases and aligning them to these capabilities.
- Solution proposals, working with Architects and the wider Solution Engineer (SE) teams.
- Building reference models/ideas/approaches for inclusion of GPT-based products within wider Salesforce solution architectures, especially involving Data Cloud.
- Alignment with customer security and privacy teams on trust capabilities and values of our solutions.
- Presenting at multiple customer events, from single-account sessions through to major strategic events (World Tour, Dreamforce).
- Representing Salesforce at other events (subject to PM approval).
- Sales and SE organisation education and enablement, e.g. roadmap, across all roles and product areas.
- Bridge/primary contact point to product management.
- Provide thought leadership on how large enterprise organisations can drive customer success through digital transformation.
- Uncover the challenges and issues a business is facing by running successful and targeted discovery sessions and workshops.
- Be an innovator who can build new solutions using out-of-the-box thinking.
- Demonstrate business value of our AI solutions using solution presentations, demonstrations, and prototypes.
- Build roadmaps that clearly articulate how partners can implement and accept solutions to move from current to future state.
- Deliver functional and technical responses to RFPs/RFIs.
- Work as an excellent teammate by contributing, learning, and sharing new knowledge.
- Demonstrate conceptual knowledge of how to integrate cloud applications with existing business applications and technology.
- Lead multiple customer engagements concurrently.
- Be self-motivated, flexible, and take initiative.

Preferred Qualifications:
- Expertise in an AI-related subject (ML, deep learning, NLP, etc.).
- Familiarity with technologies such as OpenAI, Google Vertex, Amazon SageMaker, Snowflake, Databricks, etc.

Required Qualifications:
- Experience will be evaluated based on the core proficiencies of the role.
- 4+ years working directly in the commercial technology space with AI products and solutions.
- Data knowledge: data science, data lakes and warehouses, ETL, ELT, data quality.
- AI knowledge: application of algorithms and models to solve business problems (ML, LLMs, GPT).
- 10+ years working in a sales, pre-sales, consulting, or related function in a commercial software company.
- Strong focus and experience in pre-sales or implementation.
- Experience demonstrating customer engagement solutions, understanding and driving use cases and customer journeys, and the ability to draw a day-in-the-life view across different LOBs.
- Business analysis, business case, and return-on-investment construction.
- Demonstrable experience presenting and communicating complex concepts to large audiences.
- A broad understanding of, and ability to articulate, the benefits of CRM, Sales, Service, and Marketing Cloud offerings.
- Strong verbal and written communication skills with a focus on needs analysis, positioning, business justification, and closing techniques.
- Continuous-learning demeanor with a demonstrated history of self-enablement and advancement in both technology and behavioural areas.

Accommodations
If you require assistance due to a disability when applying for open positions, please submit a request.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all, and we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 6 days ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Mumbai, Maharashtra, India

On-site

Source: Foundit

Senior Developer with 8 to 10 years of experience, with special emphasis on Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Good knowledge of data warehouse tools to understand the existing system. The candidate should also have experience with data lakes, Teradata, and Snowflake, and should be good at Terraform.
- 8-10 years of experience designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of all data lake concepts and able to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and CI/CD pipelines.
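For illustration, a minimal sketch of querying Athena from Python with boto3, one of the AWS data components the posting names; the database, table, region, and output bucket are placeholders.

```python
# Hypothetical sketch: run an Athena query via boto3 and poll for completion.
# Athena is asynchronous, so the query id must be polled. Names are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

start = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "datalake_db"},              # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

query_id = start["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} data rows")  # first row is the header
```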

Posted 6 days ago

Apply

2.0 - 5.0 years

3 - 10 Lacs

Pune, Maharashtra, India

On-site

Source: Foundit

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
- Provide expertise in analysis, requirements gathering, design, coordination, customization, testing, and support of reports in the client's environment.
- Develop and maintain a strong working relationship with business and technical members of the team.
- Relentless focus on quality and continuous improvement.
- Perform root cause analysis of report issues.
- Development and evolutionary maintenance of the environment, performance, capability, and availability.
- Assist in defining technical requirements and developing solutions.
- Effective content and source-code management, troubleshooting, and debugging.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Tableau Desktop Specialist; strong understanding of SQL for querying databases.
- Good to have: Python, Snowflake, statistics, ETL experience.
- Extensive knowledge of creating impactful visualizations using Tableau.
- Thorough understanding of SQL and advanced SQL (joins and relationships).
- Experience working with different databases and blending data and creating relationships in Tableau.
- Extensive knowledge of creating Custom SQL to pull desired data from databases.
- Troubleshooting capabilities to debug data controls.

Preferred technical and professional experience
- Capable of converting business requirements into a workable model.
- Good communication skills, willingness to learn new technologies, team player, self-motivated, positive attitude.

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and machine learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.

Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.

Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (a plus).

Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.
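As an illustration of the Kafka/MongoDB integration this role describes, a minimal consumer sketch using the kafka-python and pymongo packages; the topic, broker address, and collection names are placeholders.

```python
# Hypothetical sketch: consume a Kafka topic and upsert each record into
# MongoDB. Topic, broker, database, and key field are placeholders.
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "orders",                                  # placeholder topic
    bootstrap_servers="localhost:9092",        # placeholder broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

orders = MongoClient("mongodb://localhost:27017")["shop"]["orders"]

for message in consumer:
    order = message.value
    # Idempotent upsert keyed on the order id, so message replays are safe
    orders.update_one({"_id": order["order_id"]}, {"$set": order}, upsert=True)
```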

Posted 6 days ago

Apply

9.0 - 10.0 years

9 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Qualifications
Total 9 years of experience, with a minimum of 5 years working as a DBT administrator.
- DBT Core & Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud. Administer and manage DBT Cloud environments, including users, permissions, job scheduling, and Git integration. Onboard and enable DBT users on the DBT Cloud platform, and work closely with users to support DBT adoption and usage.
- SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks.
- Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management.
- Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling.
- Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing.
- Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging.
- Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access.
- Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams.
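For context, a minimal sketch of triggering a dbt Cloud job run via its v2 REST API, the kind of job-scheduling administration described above. The account id, job id, and token are placeholders; the exact host and endpoint should be confirmed against the dbt Cloud API docs for your account's region.

```python
# Hypothetical sketch: trigger a dbt Cloud job run over the v2 REST API.
# Account id, job id, and token are placeholders; host may differ by region.
import requests

ACCOUNT_ID = 12345          # placeholder
JOB_ID = 67890              # placeholder
TOKEN = "dbtc_***"          # dbt Cloud service token (placeholder)

resp = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"cause": "Triggered by nightly orchestration"},
    timeout=30,
)
resp.raise_for_status()

run = resp.json()["data"]
print(f"started run {run['id']} with status {run['status']}")
```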

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and machine learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.

Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.

Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (a plus).

Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and machine learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.

Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.

Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (a plus).

Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 6 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Title: Senior Data Engineer

Key Responsibilities
As a Senior Data Engineer, you will:
- Data Pipeline Development: Design, build, and maintain scalable data pipelines using PySpark and Python.
- AWS Cloud Integration: Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage.
- ETL Workflow Management: Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow).
- Data Warehousing: Design and manage data models in Snowflake, ensuring performance and reliability.
- SQL Optimization: Utilize SQL for querying and optimizing datasets across different databases.
- Data Integration: Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources.
- Collaboration & Support: Collaborate with data scientists, analysts, and other engineers to support advanced analytics and machine learning (ML) initiatives.
- Data Quality & Governance: Ensure data quality, lineage, and governance through best practices and tools.

Mandatory Skills & Experience
- Strong programming skills in Python and PySpark.
- Hands-on experience with AWS data services (S3, Lambda, Glue, EMR, Redshift).
- Proficiency in SQL and experience with DBT for data transformation.
- Experience with Snowflake for data warehousing.
- Knowledge of MongoDB, Kafka, and data streaming concepts.
- Good understanding of data architecture, data modeling, and data governance.
- Familiarity with large-scale data platforms.

Essential Professional Skills
- Excellent problem-solving skills.
- Ability to work independently or as part of a team.
- Experience with CI/CD and DevOps practices in a data engineering environment (a plus).

Qualifications
- Proven hands-on experience working with large-scale data platforms.
- Strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT.
- Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka.

Posted 6 days ago

Apply

7.0 - 9.0 years

7 - 9 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Source: Foundit

- Design and develop end-to-end Master Data Management solutions using Informatica MDM Cloud Edition or on-prem hybrid setups.
- Implement match and merge rules, survivorship, hierarchy management, and data stewardship workflows.
- Configure landing, staging, and base objects, mappings, cleanse functions, match rules, and trust/survivorship rules.
- Integrate MDM with cloud data lakes/warehouses (e.g., Snowflake, Redshift, Synapse) and business applications.
- Design batch and real-time integration using Informatica Cloud (IICS), APIs, or messaging platforms.
- Work closely with data architects and business analysts to define MDM data domains (e.g., Customer, Product, Vendor).
- Ensure data governance, quality, lineage, and compliance standards are followed.
- Provide production support and enhancements to existing MDM solutions.
- Create and maintain technical documentation, test cases, and deployment artifacts.

Posted 6 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

- Validate data within the Snowflake database, ensuring data accuracy and consistency.
- Develop comprehensive test cases based on project requirements, ensuring thorough test coverage.
- Automate end-to-end testing using the Behave framework for cloud-based applications.
- Perform automated regression testing before each deployment, minimizing the risk of introducing new defects.
- Monitor EMR file processing, identifying and reporting failures and contributing to improved data reliability.
- Validate the data counts for received and processed records, contributing to accurate data reporting.
- Analyze requirements during the analysis phase and provide solutions to build strong applications.
- Early performance testing engagements, including planning, risk assessment, and performance engineering.
- Conduct user acceptance testing with product teams and customers, ensuring project deliverables meet requirements.

Required Skills
- 5+ years of experience in ETL testing and database testing.
- Experience in Snowflake, AWS, JIRA, Agile, MySQL, Google Cloud Platform, SQL, and PuTTY.
- Well versed in functional testing concepts and automation testing using Selenium.
- Hands-on experience writing test automation scripts using Java/Python.
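As an illustration of Behave-based automation for the count validation described above, a minimal step-definition sketch; the table names, feature text, and Snowflake connection details are placeholders.

```python
# Hypothetical sketch: Behave step definitions comparing received vs.
# processed record counts in Snowflake. Names are placeholders.
#
# A matching feature file (features/counts.feature) might read:
#   Scenario: Processed counts match received counts
#     Given the landing table "RAW.EVENTS" has been loaded
#     When I count rows in "CURATED.EVENTS"
#     Then the processed count equals the received count
from behave import given, when, then

import snowflake.connector  # assumed available in the test environment


def row_count(context, table):
    with context.conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]


@given('the landing table "{table}" has been loaded')
def step_landing_loaded(context, table):
    # Connection settings passed via behave's -D userdata flags (placeholder)
    context.conn = snowflake.connector.connect(**context.config.userdata)
    context.received = row_count(context, table)


@when('I count rows in "{table}"')
def step_count_processed(context, table):
    context.processed = row_count(context, table)


@then("the processed count equals the received count")
def step_compare(context):
    assert context.processed == context.received, (
        f"expected {context.received} rows, found {context.processed}"
    )
```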

Posted 6 days ago

Apply

10.0 - 15.0 years

17 - 34 Lacs

Remote, India

On-site

Source: Foundit

- 9-11 years of experience developing data management solutions, including warehouse design, building, and data ingestion/ETL.
- Experience leading a team of multi-disciplined developers to deliver data analytics solutions using Agile processes.
- 5-7 years of relevant experience in SQL, PL/SQL, Python, ETL/ELT, and data modeling.
- 3-5 years of experience working with cloud data platforms such as Snowflake, Redshift, Databricks, etc.
- 2-3 years of experience in AWS (DMS, Glue, Lambda).
- Experience with life sciences business process domains, such as R&D, manufacturing, quality, and supply chain, is beneficial.
- Strong data analysis skills.
- Experience with data modeling, especially for warehouses and repositories.
- Experience with SQL and relational databases.
- Development expertise with data management design principles.
- Ability to communicate well with both business owners and technical staff, at the appropriate level for each.
- Team management experience.
- Scrum Master experience in Agile (SAFe experience a plus).
- Experience with Snowflake, Oracle databases, and Informatica PowerCenter and/or IICS.
- Experience with analytics and business intelligence solutions.

Posted 6 days ago

Apply

4.0 - 6.0 years

16 - 18 Lacs

Remote, India

On-site

Source: Foundit

- 3-5 years of relevant experience in SQL, PL/SQL, and Python.
- 2-3 years of experience working with cloud data platforms such as Snowflake, Redshift, Databricks, etc.
- 2-3 years of experience in AWS (DMS, Glue, Lambda).
- 3-5 years of experience in ETL/ELT and data modeling, such as star schemas.
- Programming skills for data analytics (PL/SQL, Python).
- Business analysis and requirements development.
- Data modeling (entity relationships, star and snowflake schemas).
- Data analysis.
- Solution design and architecture for data products.
- Strong technical and business communication skills.
- Agile methodologies.
- Strong interpersonal, verbal, and written communication.
- Exceptional analytical, problem-solving, and troubleshooting abilities.

Posted 6 days ago

Apply

5.0 - 7.0 years

16 - 18 Lacs

Remote, India

On-site

Source: Foundit

- 5-7 years of relevant experience in SQL, PL/SQL, and Python.
- 3-5 years of experience working with cloud data platforms such as Snowflake, Redshift, Databricks, etc.
- 2-3 years of experience in AWS (DMS, Glue, Lambda).
- 5-7 years of experience in ETL/ELT and data modeling, such as star schemas.
- Programming skills for data analytics (PL/SQL, Python).
- Business analysis and requirements development.
- Data modeling (entity relationships, star and snowflake schemas).
- Data analysis.
- Solution design and architecture for data products.
- Strong technical and business communication skills.
- Agile methodologies.
- Strong interpersonal, verbal, and written communication.
- Exceptional analytical, problem-solving, and troubleshooting abilities.

Posted 6 days ago

Apply

4.0 - 8.0 years

1 - 2 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Description
We are seeking a Senior Data Engineer with expertise in Snowflake, Informatica, and ETL solutions to join our dynamic team. The ideal candidate will have 4-8 years of experience in data engineering and a strong background in developing and optimizing data pipelines. You will play a critical role in transforming raw data into actionable insights and ensuring data integrity across our platforms. If you are passionate about data and have a knack for problem-solving, we would love to hear from you.

Responsibilities
- Design and implement robust ETL solutions using Informatica to extract, transform, and load data into Snowflake.
- Develop and optimize data pipelines and workflows in Snowflake to ensure efficient data processing.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications.
- Monitor and troubleshoot data workflows and resolve issues in a timely manner.
- Ensure data quality and integrity through rigorous testing and validation processes.
- Perform data modeling and data architecture design to support analytical and reporting needs.
- Document data engineering processes and maintain comprehensive technical documentation.

Skills and Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 4-8 years of experience in data engineering, with a focus on Snowflake and ETL solutions.
- Proficient in the Snowflake data warehousing platform and its ecosystem.
- Strong experience with Informatica PowerCenter or Informatica Cloud for ETL processes.
- Solid understanding of SQL and experience writing complex queries.
- Familiarity with data modeling concepts and best practices.
- Knowledge of Python or other programming languages for data manipulation and automation.
- Experience with cloud platforms (AWS, Azure, GCP) is a plus.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.
- Excellent analytical and problem-solving skills.

Posted 6 days ago

Apply